Results 1 - 9 of 9
1.
J Cult Cogn Sci ; 6(3): 251-268, 2022.
Article in English | MEDLINE | ID: mdl-35996660

ABSTRACT

This study investigated the universality of emotional prosody in the perception of discrete emotions when semantics is not available. In two experiments, the perception of emotional prosody in Hebrew and German was investigated with listeners who speak one of the languages but not the other. Having a parallel tool in both languages allowed us to conduct controlled comparisons. In Experiment 1, 39 native German speakers with no knowledge of Hebrew and 80 native Hebrew speakers rated Hebrew sentences spoken with one of four emotional prosodies (anger, fear, happiness, sadness) or with neutral prosody. The Hebrew version of the Test for Rating of Emotions in Speech (T-RES) was used for this purpose. Ratings indicated participants' agreement on how much each sentence conveyed each of the four discrete emotions (anger, fear, happiness and sadness). In Experiment 2, 30 native speakers of German and 24 native speakers of Hebrew with no knowledge of German rated sentences from the German version of the T-RES. Based only on the prosody, German-speaking participants were able to accurately identify the emotions in the Hebrew sentences, and Hebrew-speaking participants were able to identify the emotions in the German sentences. In both experiments, ratings were similar between the groups. These findings show that individuals can identify emotions in a foreign language even without access to semantics. This ability goes beyond identification of the target emotion; similarities between languages exist even for "wrong" perceptions. This adds to the accumulating evidence in the literature on the universality of emotional prosody. Supplementary Information: The online version contains supplementary material available at 10.1007/s41809-022-00107-x.

2.
JMIR Serious Games ; 10(3): e32297, 2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35900825

ABSTRACT

BACKGROUND: The number of serious games for cognitive training in aging (SGCTAs) is proliferating in the market, attempting to combat one of the most feared aspects of aging: cognitive decline. However, the efficacy of many SGCTAs is still questionable. Even the measures used to validate SGCTAs are up for debate, with most studies using cognitive measures that gauge improvement in trained tasks, also known as near transfer. This study takes a different approach, testing the efficacy of an SGCTA, Effectivate, in generating tangible far-transfer improvements in a nontrained task, the Eye tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), which tests speech processing in adverse conditions. OBJECTIVE: This study aimed to validate the use of a real-time measure of speech processing as a gauge of the far-transfer efficacy of an SGCTA designed to train executive functions. METHODS: In a randomized controlled trial that included 40 participants, we tested 20 (50%) older adults before and after self-administering the SGCTA Effectivate training and compared their performance with that of a control group of 20 (50%) older adults. The E-WINDMIL eye-tracking task was administered to all participants by blinded experimenters in 2 sessions separated by 2 to 8 weeks. RESULTS: We tested the change between sessions in the efficiency of segregating the spoken target word from its sound-sharing alternative as the word unfolds in time. Training with the SGCTA Effectivate improved both early and late speech processing in adverse conditions, with higher discrimination scores in the training group than in the control group (early processing: F(1,38)=7.371; P=.01; ηp²=0.162; late processing: F(1,38)=9.003; P=.005; ηp²=0.192). CONCLUSIONS: This study found the E-WINDMIL measure of speech processing to be a valid gauge of the far-transfer effects of executive function training. As the SGCTA Effectivate does not train any auditory task or language processing, our results provide preliminary support for its ability to create a generalized cognitive improvement. Given the crucial role of speech processing in healthy and successful aging, we encourage researchers and developers to use speech processing measures, the E-WINDMIL in particular, to gauge the efficacy of SGCTAs. We advocate for increased industry-wide adoption of far-transfer metrics to gauge SGCTAs.
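The reported effect sizes follow directly from the F statistics and their degrees of freedom: partial eta squared can be recovered as ηp² = (F·df1)/(F·df1 + df2). A minimal Python sketch as a sanity check (the function name is ours, not from the paper):

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Values reported for early and late speech processing, F(1,38)
early = partial_eta_squared(7.371, 1, 38)  # ≈ 0.162
late = partial_eta_squared(9.003, 1, 38)   # ≈ 0.192
```

Both values reproduce the ηp² figures given in the abstract.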

3.
Front Neurosci ; 16: 846117, 2022.
Article in English | MEDLINE | ID: mdl-35546888

ABSTRACT

Older adults process emotions in speech differently than young adults do. However, it is unclear whether these age-related changes affect all speech channels to the same extent, and whether they originate from a sensory or a cognitive source. The current study adopted a psychophysical approach to directly compare young and older adults' sensory thresholds for emotion recognition in two channels of spoken emotions: prosody (tone) and semantics (words). A total of 29 young adults and 26 older adults listened to 50 spoken sentences presenting different combinations of emotions across prosody and semantics. In separate tasks, they were asked to recognize the prosodic or the semantic emotion. Sentences were presented against a background of speech-spectrum noise at SNRs ranging from -15 dB (difficult) to +5 dB (easy). Individual recognition thresholds were calculated (by fitting psychometric functions) separately for prosodic and semantic recognition. Results indicated that: (1) recognition thresholds were better for young than for older adults, suggesting an age-related general decrease across channels; (2) recognition thresholds were better for prosody than for semantics, suggesting a prosodic advantage; (3) importantly, the prosodic advantage in thresholds did not differ between age groups (thus a sensory source for age-related differences in spoken-emotion processing was not supported); and (4) larger failures of selective attention were found for older adults than for young adults, indicating that older adults had greater difficulty inhibiting irrelevant information. Taken together, the results do not support a sole sensory source, but rather an interplay of cognitive and sensory sources for age-related differences in spoken-emotion processing.
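The threshold procedure described above (fitting psychometric functions to recognition accuracy across SNRs) can be illustrated with a short sketch. This is not the authors' code: the logistic form, the simulated listener data, and all parameter values are our own assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, threshold, slope):
    # Logistic psychometric function: probability of correct recognition at a given SNR (dB).
    # The threshold is the SNR yielding 50% recognition.
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

# SNR range used in the study: -15 dB (difficult) to +5 dB (easy)
snrs = np.linspace(-15.0, 5.0, 9)

# Simulated accuracy for one hypothetical listener (true threshold -6 dB), with small noise
rng = np.random.default_rng(seed=1)
accuracy = np.clip(psychometric(snrs, -6.0, 0.8) + rng.normal(0.0, 0.02, snrs.size), 0.0, 1.0)

# Fit the function to recover the individual recognition threshold
(est_threshold, est_slope), _ = curve_fit(psychometric, snrs, accuracy, p0=[-5.0, 1.0])
```

Repeating this fit per listener and per channel (prosody vs. semantics) yields the individual thresholds that the group comparisons are based on.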

4.
Int J Audiol ; 60(5): 319-321, 2021 05.
Article in English | MEDLINE | ID: mdl-33063553

ABSTRACT

OBJECTIVE: COVID-19 social isolation restrictions have accelerated the need to adapt clinical assessment tools to telemedicine. Remote adaptations are of special importance for populations at risk, e.g. older adults and individuals with chronic medical comorbidities. In response to this urgent clinical and scientific need, we describe a remote adaptation of the T-RES (Oron et al. 2020; IJA), designed to assess the complex processing of spoken emotions based on identification and integration of the semantics and prosody of spoken sentences. DESIGN: We present iT-RES, an online version of the speech-perception assessment tool, detailing the challenges considered and the solutions chosen when designing the telehealth tool. We show a preliminary validation of performance against the original lab-based T-RES. STUDY SAMPLE: A between-participants design with two groups of young adults (N = 78; T-RES, n = 39; iT-RES, n = 39). RESULTS: iT-RES performance closely followed that of the T-RES, with no group differences found in the main trends: identification of emotions, selective attention, and integration. CONCLUSIONS: The design of iT-RES mapped the main challenges for remote auditory assessments and the solutions taken to address them. We hope that this will encourage further efforts toward telehealth adaptations of clinical services, to meet the needs of special populations and avoid halting scientific research.


Subjects
Audiology/methods, Speech Audiometry/methods, COVID-19, Telemedicine/methods, Voice Recognition, Adult, Attention, Emotions, Female, Humans, Male, Quarantine, SARS-CoV-2, Semantics, Speech Perception, Young Adult
5.
Int J Audiol ; 59(3): 195-207, 2020 03.
Article in English | MEDLINE | ID: mdl-31663391

ABSTRACT

Objective: To understand communication difficulties related to tinnitus by identifying tinnitus-related differences in the perception of spoken emotions, focussing on the roles of semantics (words), prosody (tone of speech) and their interaction. Study sample and design: Twenty-two people with tinnitus (PwT) and 24 people without tinnitus (PnT) listened to spoken sentences made of different combinations of four discrete emotions (anger, happiness, sadness, neutral) presented in the prosody and semantics (Test for Rating Emotions in Speech). In separate blocks, listeners were asked to attend to the sentence as a whole, integrating both speech channels (gauging integration), or to focus on one channel only (gauging identification and selective attention). Their task was to rate how much they agreed the sentence conveyed each of the predefined emotions. Results: Both groups identified emotions similarly and showed similar failures of selective attention. Group differences were found in the integration of channels: PnT showed a bias towards prosody, whereas PwT weighed both channels equally. Conclusions: Tinnitus appears to impact the integration of the prosodic and semantic channels. Three possible sources are suggested: (a) sensory: tinnitus may reduce prosodic cues; (b) cognitive: a tinnitus-related reduction in cognitive processing.


Subjects
Emotions, Semantics, Speech Perception, Tinnitus/psychology, Adult, Attention, Comprehension, Cues, Female, Humans, Language, Male, Middle Aged, Speech, Task Performance and Analysis
6.
J Speech Lang Hear Res ; 62(4S): 1188-1202, 2019 04 26.
Article in English | MEDLINE | ID: mdl-31026192

ABSTRACT

Purpose: We aim to identify the possible sources for age-related differences in the perception of emotion in speech, focusing on the distinct roles of semantics (words) and prosody (tone of speech) and their interaction. Method: We implemented the Test for Rating of Emotions in Speech (Ben-David, Multani, Shakuf, Rudzicz, & van Lieshout, 2016). Forty older and 40 younger adults were presented with spoken sentences made of different combinations of 5 emotional categories (anger, fear, happiness, sadness, and neutral) presented in the prosody and semantics. In separate tasks, listeners were asked to attend to the sentence as a whole, integrating both speech channels, or to focus on 1 channel only (prosody/semantics). Their task was to rate how much they agreed the sentence conveyed a predefined emotion. Results: (a) Identification of emotions: both age groups identified the presented emotions. (b) Failure of selective attention: both age groups were unable to selectively attend to 1 channel when instructed, with slightly larger failures for older adults. (c) Integration of channels: younger adults showed a bias toward prosody, whereas older adults showed a slight bias toward semantics. Conclusions: Three possible sources are suggested for age-related differences: (a) underestimation of the emotional content of speech, (b) slightly larger failures to selectively attend to 1 channel, and (c) different weights assigned to the 2 speech channels.


Subjects
Age Factors, Aging/psychology, Emotions/physiology, Semantics, Speech Perception/physiology, Adult, Aged, Attention, Female, Humans, Male, Speech Acoustics
7.
J Speech Lang Hear Res ; 59(1): 72-89, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26903033

ABSTRACT

PURPOSE: Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. METHOD: We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5 discrete emotions (anger, fear, happiness, sadness, and neutral) presented in prosody and semantics. Listeners were asked to rate the sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody or semantics). RESULTS: We observed supremacy of congruency, failure of selective attention, and prosodic dominance. Supremacy of congruency means that a sentence that presents the same emotion in both speech channels was rated highest; failure of selective attention means that listeners were unable to selectively attend to one channel when instructed; and prosodic dominance means that prosodic information plays a larger role than semantics in processing emotional speech. CONCLUSIONS: Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. Our findings indicate that the Test for Rating of Emotions in Speech can reveal specific aspects in the processing of emotional speech and may in the future prove useful for understanding emotion-processing deficits in individuals with pathologies.


Subjects
Emotions, Psycholinguistics, Semantics, Speech Perception, Female, Humans, Language Tests, Male, Models, Psychological, Pattern Recognition, Physiological, Psychological Tests, Young Adult
9.
J Alzheimers Dis ; 38(4): 923-38, 2014.
Article in English | MEDLINE | ID: mdl-24100125

ABSTRACT

Selective attention, an essential part of daily activity, is often impaired in people with Alzheimer's disease (AD). It is usually measured with the color-word Stroop test. However, there is no universal agreement on whether performance on the Stroop task changes significantly in AD patients, or, if so, whether an increase in Stroop effects reflects a decrease in selective attention, a slowing in generalized speed of processing (SOP), or the result of degraded color vision. The current study investigated the impact of AD on Stroop performance and its potential sources in a meta-analysis and mathematical modeling of 18 studies, comparing 637 AD patients with 977 healthy age-matched participants. We found a significant increase in Stroop effects for AD patients across studies. This AD-related change was associated with a slowing in SOP. However, after correcting for a bias in the distribution of latencies, SOP could explain only a moderate portion of the total variance (25%). Moreover, we found strong evidence for an AD-related increase in the latency difference between naming the font color and reading color-neutral stimuli (r² = 0.98). This increase in the dimensional imbalance between color-naming and word-reading was found to explain a significant portion of the AD-related increase in Stroop effects (r² = 0.87), hinting at a possible sensory source. In conclusion, our analysis highlights the importance of controlling for sensory degradation and SOP when testing cognitive performance, and specifically selective attention, in AD patients. We also suggest possible measures and tools to better test for selective attention in AD.
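Generalized slowing of the kind modeled above is commonly assessed with a Brinley-style regression, plotting patient latencies against those of matched controls across studies; a slope above 1 is consistent with proportional slowing. A minimal sketch with entirely illustrative numbers (not data from this meta-analysis):

```python
import numpy as np

# Hypothetical mean latencies (ms) per study: healthy controls vs. AD patients (illustrative only)
control_rt = np.array([650.0, 700.0, 720.0, 800.0, 900.0, 950.0])
ad_rt = np.array([780.0, 870.0, 900.0, 1010.0, 1160.0, 1230.0])

# Brinley-style regression: AD latency as a linear function of control latency
slope, intercept = np.polyfit(control_rt, ad_rt, 1)
r_squared = np.corrcoef(control_rt, ad_rt)[0, 1] ** 2

# A slope > 1 indicates generalized slowing; r_squared measures how much of the
# between-study variance a single slowing factor can account for.
```

A high r² from such a fit would support a single slowing factor, whereas residual variance, as in the 25% figure above, points to additional sources beyond SOP.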


Assuntos
Doença de Alzheimer/diagnóstico , Doença de Alzheimer/psicologia , Atenção/fisiologia , Percepção de Cores/fisiologia , Tempo de Reação/fisiologia , Teste de Stroop , Humanos